logit model


How Market Volatility Shapes Algorithmic Collusion: A Comparative Analysis of Learning-Based Pricing Algorithms

Sravon, Aheer, Ibrahim, Md., Mazumder, Devdyuti, Aziz, Ridwan Al

arXiv.org Artificial Intelligence

The rapid diffusion of autonomous pricing algorithms has reshaped competitive dynamics in digital marketplaces, raising important economic and policy questions about their potential for collusive behavior. A substantial body of research demonstrates that reinforcement-learning (RL) agents can autonomously coordinate on supracompetitive outcomes even in the absence of explicit communication. Foundational contributions--including the work in [1]--show that algorithmic agents may systematically learn tacitly collusive strategies across multiple market structures, with Q-learning in particular generating prices above competitive levels in Logit, Hotelling, and linear demand environments. These concerns are reinforced by seminal work such as [2], which demonstrates that simple Q-learning agents reliably sustain collusion through structured punishment and reward cycles in repeated pricing games, as well as by [3], who document how algorithmic systems may generate sudden price spikes in response to high-impact, low-probability events (HILP), unintentionally coordinating on elevated prices. The study of [4] establishes a robust empirical and computational foundation demonstrating that pricing algorithms may autonomously learn to collude. A complementary line of research focuses specifically on Q-learning's capacity to learn collusive equilibria, as documented in papers [2], [5], and [6]. These findings are consistent with the theoretical properties of Q-learning established by [7], who show that the algorithm incrementally learns long-run discounted value-maximizing strategies in sequential decision problems. More recent studies further reveal that deep reinforcement-learning (deep RL) algorithms--including DDQN and SAC--may also display collusive tendencies. For instance, [8] documents that modern RL systems can coordinate on higher-than-competitive prices under a variety of market configurations.
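The mechanism these papers study can be illustrated with a minimal sketch (not any paper's actual implementation; the demand function, price grid, and learning parameters below are all assumed for illustration): two epsilon-greedy Q-learning agents repeatedly set prices in a toy duopoly, each conditioning on the rival's last price.

```python
import numpy as np

def demand(p_own, p_rival):
    """Toy linear demand: a lower own price captures more of the market."""
    return max(0.0, 1.0 - p_own + 0.5 * p_rival)

def q_learning_duopoly(n_steps=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    prices = np.linspace(0.1, 1.0, 10)      # discrete price grid
    n = len(prices)
    # Each agent's state: index of the rival's previous price.
    Q = [np.zeros((n, n)), np.zeros((n, n))]
    state = [0, 0]
    for _ in range(n_steps):
        acts = []
        for i in range(2):
            if rng.random() < eps:           # epsilon-greedy exploration
                acts.append(int(rng.integers(n)))
            else:
                acts.append(int(np.argmax(Q[i][state[i]])))
        profits = [prices[acts[0]] * demand(prices[acts[0]], prices[acts[1]]),
                   prices[acts[1]] * demand(prices[acts[1]], prices[acts[0]])]
        new_state = [acts[1], acts[0]]       # each agent observes the rival's price
        for i in range(2):
            best_next = np.max(Q[i][new_state[i]])
            Q[i][state[i], acts[i]] += alpha * (
                profits[i] + gamma * best_next - Q[i][state[i], acts[i]])
        state = new_state
    return prices, Q

prices, Q = q_learning_duopoly()
learned = [prices[int(np.argmax(Q[i][0]))] for i in range(2)]
```

Whether the learned prices settle above the competitive level depends on the discount factor, exploration schedule, and demand specification, which is exactly the comparative question the paper investigates.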


Automatic Piecewise Linear Regression for Predicting Student Learning Satisfaction

Choi, Haemin, Nadarajan, Gayathri

arXiv.org Artificial Intelligence

Although student learning satisfaction has been widely studied, modern techniques such as interpretable machine learning and neural networks have not been sufficiently explored. This study demonstrates that a recent model that combines boosting with interpretability, automatic piecewise linear regression (APLR), offers the best fit for predicting learning satisfaction among several state-of-the-art approaches. Through the analysis of APLR's numerical and visual interpretations, students' time management and concentration abilities, perceived helpfulness to classmates, and participation in offline courses have the most significant positive impact on learning satisfaction. Surprisingly, involvement in creative activities did not positively affect learning satisfaction. Moreover, the contributing factors can be interpreted on an individual level, allowing educators to customize instructions according to student profiles.
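Setting APLR's exact API aside, its core ingredient, a piecewise-linear basis of hinge terms fitted by linear regression, can be sketched as follows (the knot locations and data here are invented for illustration):

```python
import numpy as np

def hinge_features(x, knots):
    """Expand a 1-D feature into a linear term plus max(0, x - k) hinge terms."""
    cols = [x] + [np.maximum(0.0, x - k) for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
# True signal: slope 0.5 below x = 5, slope 2.0 above (continuous at the kink).
y = np.where(x < 5, 0.5 * x, 2.5 + 2.0 * (x - 5))
X = hinge_features(x, knots=[2.5, 5.0, 7.5])
X1 = np.column_stack([np.ones_like(x), X])       # add intercept
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = X1 @ coef
```

Because the true kink at 5.0 is among the knots, the fit recovers the signal exactly; APLR additionally selects knots and terms automatically in a boosting loop.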



Graph neural networks for residential location choice: connection to classical logit models

Cheng, Zhanhong, Hu, Lingqian, Bu, Yuheng, Zhou, Yuqi, Wang, Shenhao

arXiv.org Machine Learning

Researchers have adopted deep learning for classical discrete choice analysis as it can capture complex feature relationships and achieve higher predictive performance. However, the existing deep learning approaches cannot explicitly capture the relationship among choice alternatives, which has been a long-lasting focus in classical discrete choice models. To address the gap, this paper introduces the Graph Neural Network (GNN) as a novel framework to analyze residential location choice. The GNN-based discrete choice models (GNN-DCMs) offer a structured approach for neural networks to capture dependence among spatial alternatives, while maintaining clear connections to classical random utility theory. Theoretically, we demonstrate that the GNN-DCMs incorporate the nested logit (NL) model and the spatially correlated logit (SCL) model as two specific cases, yielding a novel algorithmic interpretation through message passing among alternatives' utilities. Empirically, the GNN-DCMs outperform benchmark MNL, SCL, and feedforward neural networks in predicting residential location choices among Chicago's 77 community areas. Regarding model interpretation, the GNN-DCMs can capture individual heterogeneity and exhibit spatially-aware substitution patterns. Overall, these results highlight the potential of GNN-DCMs as a unified and expressive framework for synergizing discrete choice modeling and deep learning in complex spatial choice contexts.
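The nested logit special case mentioned above can be made concrete in a few lines (utilities, nests, and scale parameters below are invented): when every nest scale equals 1, the nested logit collapses to the plain multinomial logit, which is the kind of limiting relationship the GNN-DCM framework generalizes.

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    V = np.asarray(V, dtype=float)
    e = np.exp(V - V.max())              # shift for numerical stability
    return e / e.sum()

def nested_logit_probs(V, nests, lam):
    """Nested logit with per-nest scale lam[m] in (0, 1]."""
    V = np.asarray(V, dtype=float)
    P = np.zeros_like(V)
    # Inclusive value of each nest: lam * log-sum-exp of scaled utilities.
    iv = np.array([lam[m] * np.log(np.sum(np.exp(V[idx] / lam[m])))
                   for m, idx in enumerate(nests)])
    top = np.exp(iv - iv.max())
    top /= top.sum()                     # choice probability of each nest
    for m, idx in enumerate(nests):
        within = np.exp(V[idx] / lam[m])
        P[idx] = top[m] * within / within.sum()
    return P

V = np.array([1.0, 2.0, 0.5, 1.5])
p_nested = nested_logit_probs(V, nests=[[0, 1], [2, 3]], lam=[0.5, 0.5])
```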


Neural Collapse in Cumulative Link Models for Ordinal Regression: An Analysis with Unconstrained Feature Model

Ma, Chuang, Obuchi, Tomoyuki, Tanaka, Toshiyuki

arXiv.org Machine Learning

A phenomenon known as "Neural Collapse (NC)" in deep classification tasks, in which the penultimate-layer features and the final classifiers exhibit an extremely simple geometric structure, has recently attracted considerable attention, with the expectation that it can deepen our understanding of how deep neural networks behave. The Unconstrained Feature Model (UFM) has been proposed to explain NC theoretically, and a growing body of work has emerged that extends NC to tasks other than classification and leverages it for practical applications. In this study, we investigate whether a similar phenomenon arises in deep Ordinal Regression (OR) tasks, by combining the cumulative link model for OR with the UFM. We show that a phenomenon we call Ordinal Neural Collapse (ONC) indeed emerges and is characterized by the following three properties: (ONC1) all optimal features in the same class collapse to their within-class mean when regularization is applied; (ONC2) these class means align with the classifier, meaning that they collapse onto a one-dimensional subspace; (ONC3) the optimal latent variables (corresponding to logits or preactivations in classification tasks) are aligned according to the class order, and in particular, in the zero-regularization limit, a highly local and simple geometric relationship emerges between the latent variables and the threshold values. We prove these properties analytically within the UFM framework with fixed threshold values and corroborate them empirically across a variety of datasets. We also discuss how these insights can be leveraged in OR, highlighting the use of fixed thresholds.
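The cumulative link model underlying this analysis can be sketched briefly (an ordered logit with fixed, illustrative thresholds): a one-dimensional latent score z is cut into ordered classes by an increasing sequence of thresholds, and class probabilities are differences of the link CDF at adjacent cuts.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cumulative_link_probs(z, thresholds):
    """Ordered-logit class probabilities for latent score z.
    thresholds must be strictly increasing; the K thresholds cut the
    real line into K + 1 ordered classes."""
    th = np.concatenate(([-np.inf], np.asarray(thresholds, dtype=float), [np.inf]))
    cdf = sigmoid(th - z)            # P(y <= k) evaluated at each cut point
    return np.diff(cdf)              # P(y = k) = P(y <= k) - P(y <= k - 1)

p = cumulative_link_probs(0.0, [-1.0, 1.0])   # 3 classes, symmetric thresholds
```

With symmetric thresholds around z, the middle class is most likely; pushing z far past the top threshold concentrates mass on the highest class, which is the ordering structure ONC3 describes.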


Demand Estimation with Text and Image Data

Compiani, Giovanni, Morozov, Ilya, Seiler, Stephan

arXiv.org Artificial Intelligence

We propose a demand estimation method that leverages unstructured text and image data to infer substitution patterns. Using pre-trained deep learning models, we extract embeddings from product images and textual descriptions and incorporate them into a random coefficients logit model. This approach enables researchers to estimate demand even when they lack data on product attributes or when consumers value hard-to-quantify attributes, such as visual design or functional benefits. Using data from a choice experiment, we show that our approach outperforms standard attribute-based models in counterfactual predictions of consumers' second choices. We also apply it across 40 product categories on Amazon and consistently find that text and image data help identify close substitutes within each category.
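A simulated-probability sketch of the random coefficients logit follows (with made-up "embedding" features and taste distribution; this is not the authors' estimator): choice probabilities are averaged over random draws of the taste vector applied to product embeddings.

```python
import numpy as np

def mixed_logit_probs(X, mu, sigma, n_draws=2000, seed=0):
    """Random-coefficients logit choice probabilities by simulation.
    X: (J, D) product features (e.g. image/text embeddings);
    mu, sigma: mean and std of normally distributed tastes per dimension."""
    rng = np.random.default_rng(seed)
    betas = mu + sigma * rng.standard_normal((n_draws, len(mu)))  # (R, D)
    V = betas @ X.T                            # (R, J) utilities per draw
    V -= V.max(axis=1, keepdims=True)          # stabilize the softmax
    e = np.exp(V)
    P = e / e.sum(axis=1, keepdims=True)       # per-draw logit probabilities
    return P.mean(axis=0)                      # average over taste draws

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 products, 2-dim features
mu = np.array([0.5, -0.5])
p = mixed_logit_probs(X, mu, sigma=np.array([0.3, 0.3]))
```

Taste heterogeneity (sigma > 0) is what generates flexible substitution patterns; with sigma = 0 the model reduces to a plain multinomial logit.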


Designing Graph Convolutional Neural Networks for Discrete Choice with Network Effects

Villarraga, Daniel F., Daziano, Ricardo A.

arXiv.org Machine Learning

We introduce a novel model architecture that incorporates network effects into discrete choice problems, achieving higher predictive performance than standard discrete choice models while offering greater interpretability than general-purpose flexible model classes. Econometric discrete choice models aid in studying individual decision-making, where agents select the option with the highest reward from a discrete set of alternatives. Intuitively, the utility an individual derives from a particular choice depends on their personal preferences and characteristics, the attributes of the alternative, and the value their peers assign to that alternative or their previous choices. However, most applications ignore peer influence, and models that do consider peer or network effects often lack the flexibility and predictive performance of recently developed approaches to discrete choice, such as deep learning. We propose a novel graph convolutional neural network architecture to model network effects in discrete choices, achieving higher predictive performance than standard discrete choice models while retaining the interpretability necessary for inference--a quality often lacking in general-purpose deep learning architectures. We evaluate our architecture using revealed commuting choice data, extended with travel times and trip costs for each travel mode for work-related trips in New York City, as well as 2016 U.S. election data aggregated by county, to test its performance on datasets with highly imbalanced classes. Given the interpretability of our models, we can estimate relevant economic metrics, such as the value of travel time savings in New York City. Finally, we compare the predictive performance and behavioral insights from our architecture to those derived from traditional discrete choice and general-purpose deep learning models.


Incorporating graph neural network into route choice model

Ma, Yuxun, Seo, Toru

arXiv.org Artificial Intelligence

Route choice models are one of the most important foundations for transportation research. Traditionally, theory-based models have been utilized for their great interpretability, such as logit models and Recursive logit models. More recently, machine learning approaches have gained attention for their better prediction accuracy. In this study, we propose novel hybrid models that integrate the Recursive logit model with Graph Neural Networks (GNNs) to enhance both predictive performance and model interpretability. To the authors' knowledge, GNNs have not been utilized for route choice modeling, despite their proven effectiveness in capturing road network features and their widespread use in other transportation research areas. We mathematically show that our use of GNNs is not only beneficial for enhancing prediction performance, but also for relaxing the Independence of Irrelevant Alternatives property without relying on strong assumptions. This is due to the fact that a specific type of GNN can efficiently capture multiple cross-effect patterns on networks from data. By applying the proposed models to one-day travel trajectory data in Tokyo, we confirmed their higher prediction accuracy compared to the existing models.
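The Independence of Irrelevant Alternatives (IIA) property that the hybrid models relax can be seen numerically in a plain multinomial logit (the route utilities below are invented): adding a near-duplicate of route 1 leaves the probability ratio between routes 1 and 2 unchanged, even though the duplicate should intuitively draw share mostly from route 1.

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities."""
    V = np.asarray(V, dtype=float)
    e = np.exp(V - V.max())
    return e / e.sum()

# Two routes, then a third route that is a near-duplicate of route 1.
p2 = mnl_probs([1.0, 0.5])
p3 = mnl_probs([1.0, 0.5, 1.0])
ratio_before = p2[0] / p2[1]
ratio_after = p3[0] / p3[1]      # identical under MNL: that is IIA
```

GNN-based cross-effects let overlapping routes cannibalize each other instead, without hand-specifying the correlation structure.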


Outer Approximation and Super-modular Cuts for Constrained Assortment Optimization under Mixed-Logit Model

Pham, Hoang Giang, Mai, Tien

arXiv.org Artificial Intelligence

In this paper, we study the assortment optimization problem under the mixed-logit customer choice model. While assortment optimization has been a major topic in revenue management for decades, the mixed-logit model is considered one of the most general and flexible approaches for modeling and predicting customer purchasing behavior. Existing exact methods have primarily relied on mixed-integer linear programming (MILP) or second-order cone (CONIC) reformulations, which allow for exact problem solving using off-the-shelf solvers. However, these approaches often suffer from weak continuous relaxations and are slow when solving large instances. Our work addresses the problem by focusing on components of the objective function that can be proven to be monotonically super-modular and convex. This allows us to derive valid cuts to outer-approximate the nonlinear objective functions. We then demonstrate that these valid cuts can be incorporated into Cutting Plane or Branch-and-Cut methods to solve the problem exactly. Extensive experiments show that our approaches consistently outperform previous methods in terms of both solution quality and computation time.
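The objective being outer-approximated can be sketched directly (a toy latent-class version of the mixed logit, with invented prices, utilities, and class weights): the expected revenue of an assortment S sums, over customer classes, the class-weighted logit purchase probabilities times prices, with the no-purchase utility normalized to zero. For tiny instances the problem can even be solved by enumeration, which is what the MILP/CONIC and cutting-plane machinery replaces at scale.

```python
from itertools import combinations

import numpy as np

def assortment_revenue(S, r, V, w):
    """Expected revenue of assortment S under a latent-class (mixed) logit.
    r: (J,) prices; V: (C, J) class utilities; w: (C,) class weights;
    the utility of the no-purchase option is normalized to 0."""
    S = list(S)
    rev = 0.0
    for c in range(len(w)):
        e = np.exp(V[c, S])
        probs = e / (1.0 + e.sum())      # 1.0 = exp(0) for no-purchase
        rev += w[c] * float(np.dot(r[S], probs))
    return rev

r = np.array([5.0, 4.0, 3.0])
V = np.array([[1.0, 2.0, 0.0],
              [0.5, 0.0, 1.5]])
w = np.array([0.6, 0.4])
best_rev, best_S = max(
    (assortment_revenue(S, r, V, w), S)
    for k in range(len(r) + 1)
    for S in combinations(range(len(r)), k))
```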


Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models

Kridel, Donald, Dineen, Jacob, Dolk, Daniel, Castillo, David

arXiv.org Artificial Intelligence

Model explainability and interpretability are now being perceived as desirable, if not required, features of data science and predictive analytics overall. Our objective here is to examine what these features may look like when applied to previous research we have conducted in the area of econometric prediction and predictive analytics [10]. We consider the domain of Lending Club loan applications. For our dataset, we perform three different analyses: 1. Model Execution and Comparison. Run and compare four different prediction models on the

Explainable AI (XAI) has a counterpart in analytical modeling which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company using the following three steps: execute and compare four different prediction methods, apply the best known explainability techniques in the current literature to the model training sets to identify feature importance (FI) (static case), and finally to cross-check whether the FI set holds up under "what if" prediction